39 research outputs found

    On inferring intentions in shared tasks for industrial collaborative robots

    Inferring human operators' actions in shared collaborative tasks plays a crucial role in enhancing the cognitive capabilities of industrial robots. In these incipient collaborative robotic applications, humans and robots must share not only space but also forces and the execution of a task. In this article, we present a robotic system that is able to identify different human intentions and to adapt its behavior accordingly, using force data alone. To accomplish this aim, three major contributions are presented: (a) force-based recognition of the operator's intent, (b) a force-based dataset of physical human-robot interaction, and (c) validation of the whole system in a scenario inspired by a realistic industrial application. This work is an important step towards a more natural and user-friendly manner of physical human-robot interaction in scenarios where humans and robots collaborate in the accomplishment of a task.
    Peer Reviewed. Postprint (published version).
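
    The abstract does not spell out the recognition model; as a purely illustrative aid, the sketch below shows one way force-based intent recognition could look in Python. The window size, the intent labels and the random-forest classifier are all assumptions, not the authors' method.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        WINDOW = 50  # hypothetical number of 6-axis force/torque samples per window
        LABELS = ["push", "pull", "hold"]  # placeholder intent classes

        def window_features(ft_window):
            # Summarize a (WINDOW, 6) block of wrench readings into a fixed-size vector.
            return np.concatenate([ft_window.mean(axis=0),
                                   ft_window.std(axis=0),
                                   ft_window.max(axis=0) - ft_window.min(axis=0)])

        # Synthetic stand-in for a labeled force-interaction dataset.
        rng = np.random.default_rng(0)
        train_windows = rng.normal(size=(300, WINDOW, 6))
        train_labels = rng.choice(LABELS, size=300)

        X = np.stack([window_features(w) for w in train_windows])
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, train_labels)

        def infer_intent(ft_window):
            # Classify the operator's intent from the latest window of force data.
            return clf.predict(window_features(ft_window).reshape(1, -1))[0]

        print(infer_intent(rng.normal(size=(WINDOW, 6))))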

    Task-driven active sensing framework applied to leaf probing

    © . This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
    This article presents a new method for actively exploring a 3D workspace with the aim of localizing regions relevant to a given task. Our method encodes the exploration route in a multi-layer occupancy grid map. This map, together with a multiple-view estimator and a maximum-information-gain gathering approach, incrementally provides a better understanding of the scene until the task termination criterion is reached. The approach is designed to be applicable to any task entailing 3D object exploration where some previous knowledge of the object's approximate shape is available. Its suitability is demonstrated here for a leaf probing task using an eye-in-hand arm configuration in the context of a phenotyping application.
    Peer Reviewed. Postprint (author's final draft).
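
    As an illustration of the maximum-information-gain idea mentioned above (not the paper's multi-layer grid or multiple-view estimator), the following sketch scores candidate views by the entropy of the occupancy cells they would observe; the grid, the candidate views and the visibility model are placeholders.

        import numpy as np

        def cell_entropy(p):
            # Shannon entropy of an occupancy probability, in bits.
            p = np.clip(p, 1e-6, 1 - 1e-6)
            return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

        def expected_information_gain(grid, visible_mask):
            # Gain approximated as the entropy of the cells the view would observe,
            # assuming observed cells become (nearly) certain afterwards.
            return cell_entropy(grid[visible_mask]).sum()

        def next_best_view(grid, candidate_views, visibility_fn):
            # Pick the candidate sensor pose that maximizes expected information gain.
            gains = [expected_information_gain(grid, visibility_fn(v)) for v in candidate_views]
            return candidate_views[int(np.argmax(gains))]

        # Toy example: a 20x20x20 occupancy grid, fully unknown (p = 0.5).
        grid = np.full((20, 20, 20), 0.5)
        views = ["front", "left", "top"]                                     # placeholder view poses
        vis = {"front": grid > 0.4, "left": grid > 0.6, "top": grid > 0.4}   # placeholder visibility masks
        print(next_best_view(grid, views, lambda v: vis[v]))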

    Exploitation of time-of-flight (ToF) cameras

    This technical report reviews the state of the art in the field of ToF cameras: their advantages, their limitations, and their present-day applications, sometimes in combination with other sensors. Even though ToF cameras provide neither higher resolution nor a larger ambiguity-free range than other range-map estimation systems, advantages such as registered depth and intensity data at a high frame rate, compact design, low weight and reduced power consumption have motivated their use in numerous areas of research. In robotics, these areas range from mobile robot navigation and map building to vision-based human motion capture and gesture recognition, with particularly great potential in object modeling and recognition.
    Preprint.

    Garment manipulation dataset for robot learning by demonstration through a virtual reality framework

    Being able to teach complex capabilities, such as folding garments, to a bi-manual robot is a very challenging task, which is often tackled using learning-from-demonstration datasets. The few garment-folding datasets available nowadays to the robotics research community are either gathered from human demonstrations or generated through simulation. The former suffer from the difficulty of perceiving human action and transferring it to the dynamic control of the robot, while the latter require coding human motion into the simulator in open loop, resulting in far-from-realistic movements. In this article, we present a reduced but very accurate dataset of human cloth-folding demonstrations. The dataset is collected through a novel virtual reality (VR) framework we propose, based on Unity's 3D platform and the use of an HTC Vive Pro system. The framework is capable of simulating very realistic garments while allowing users to interact with them, in real time, through handheld controllers. By doing so, and thanks to the immersive experience, our framework closes the gap between the human and the robot perception-action loop, while simplifying data capture and resulting in more realistic samples.
    This work was developed in the context of the project CLOTHILDE ("CLOTH manIpulation Learning from DEmonstrations"), which has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 741930), and is also supported by the BURG project PCI2019-103447 funded by MCIN/AEI/10.13039/501100011033 and by the European Union.
    Peer Reviewed. Postprint (published version).

    Robots and IoT devices for assistive automation

    In recent years we have come to live in an ever more connected world, a trend that has driven the development of Internet of Things (IoT) technologies. At the same time, advances in robotics now make it possible to have service robots that are autonomous both in their mobility and in the accomplishment of their missions. The question, then, is whether it might be useful to unite these two areas: just as humans do, how can these service robots integrate the IoT objects in their environment to facilitate the fulfillment of their mission? This report focuses on how IoT objects can be integrated into the environment of a PAL Robotics robot, TIAGo. It is the result of a twelve-week internship within the Perception and Manipulation laboratory of the Institut de Robòtica i Informàtica Industrial in Barcelona.
    Postprint (updated version).
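
    The report does not name a particular protocol here; a common choice for this kind of IoT integration is MQTT. The sketch below, assuming the paho-mqtt 1.x client API, a local broker and a hypothetical topic layout, only illustrates how device states could be forwarded to a robot's world model.

        # Minimal sketch: forward IoT device states to a robot's task logic over MQTT.
        import json
        import paho.mqtt.client as mqtt

        def on_message(client, userdata, msg):
            # Each IoT device is assumed to publish its state as JSON on "home/<device>/state".
            state = json.loads(msg.payload.decode())
            print(f"{msg.topic}: {state}")  # here the robot would update its world model

        client = mqtt.Client()               # paho-mqtt 1.x style constructor
        client.on_message = on_message
        client.connect("localhost", 1883)    # placeholder broker address
        client.subscribe("home/+/state")     # placeholder topic pattern
        client.loop_forever()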

    Knowledge representation for explainability in collaborative robotics and adaptation

    Autonomous robots are going to be used in a large diversity of contexts, interacting and/or collaborating with humans, who will add uncertainty to the collaborations and cause re-planning and adaptation of the robots' plans during execution. Hence, trustworthy robots must be able to store and retrieve relevant knowledge about their collaborations and adaptations. Furthermore, they should also use that knowledge to generate explanations for human collaborators. A reasonable approach is first to represent the domain knowledge as triples using an ontology, and then to generate natural language explanations from the stored knowledge. In this article, we propose ARE-OCRA, an algorithm that generates explanations about target queries, which are answered by a knowledge base built using an Ontology for Collaborative Robotics and Adaptation (OCRA). The algorithm first queries the knowledge base to retrieve the set of triples sufficient to answer the queries, and then generates the explanation in natural language from those triples. We also present the implementation of the core algorithm's routine, construct explanation, which generates the explanations from a given set of triples. We consider three different levels of abstraction, allowing explanations to be generated for different uses and preferences; this differs from most ontology-based works in the literature, which provide only a single type of explanation. The least abstract level, the set of triples, is intended for ontology experts and debugging, while the second level, aggregated triples, is inspired by other baselines in the literature. Finally, the third level of abstraction, which combines the triples' knowledge with the natural language definitions of the ontological terms, is our novel contribution. We showcase the performance of the implementation in a collaborative robotic scenario, showing the generated explanations for the set of OCRA's competency questions. This work is a step forward towards explainable agency in collaborative scenarios where robots adapt their plans.
    Peer Reviewed. Postprint (published version).
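
    The abstract does not reproduce the construct explanation routine; the following sketch merely illustrates how one set of triples could be rendered at the three abstraction levels described, using made-up triples and term definitions rather than OCRA content.

        # Illustrative rendering of RDF-style triples at three abstraction levels
        # (raw triples, aggregated sentences, definition-enriched sentences).

        TRIPLES = [
            ("robot1", "adaptsPlanBecauseOf", "human_intervention1"),
            ("human_intervention1", "occursDuring", "collaboration1"),
        ]

        DEFINITIONS = {  # hypothetical natural-language definitions of ontological terms
            "adaptsPlanBecauseOf": "an agent changes its plan as a consequence of an event",
            "occursDuring": "an event happens within the time span of another event",
        }

        def explain(triples, level):
            if level == 1:                      # least abstract: the triples themselves
                return [f"({s}, {p}, {o})" for s, p, o in triples]
            if level == 2:                      # aggregated triples as plain sentences
                return [f"{s} {p} {o}." for s, p, o in triples]
            # level 3: sentences enriched with the terms' definitions
            return [f"{s} {p} {o} (where '{p}' means: {DEFINITIONS.get(p, 'n/a')})."
                    for s, p, o in triples]

        for lvl in (1, 2, 3):
            print(f"Level {lvl}:", *explain(TRIPLES, lvl), sep="\n  ")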

    Reward-weighted GMM and its application to action-selection in robotized shoe dressing

    The final publication is available at link.springer.com.
    In the context of assistive robotics, robots need to make multiple decisions. We explore the problem in which a robot has multiple choices to perform a task and must select, from a repertoire of pre-trained actions, the action that maximizes the probability of success. We investigate the case in which sensory data are only available before making the decision, but not while the action is being performed. In this paper we propose to use a Gaussian Mixture Model (GMM) as the decision-making system. Our adaptation permits the initialization of the model using only one sample per component. We also propose an algorithm that uses the result of each execution to update the model, thus adapting the robot's behavior to the user and evaluating the effectiveness of each pre-trained action. The proposed algorithm is applied to a robotic shoe-dressing task. Simulated and real experiments show the validity of our approach.
    Peer Reviewed. Postprint (author's final draft).
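
    As a rough illustration of this decision scheme (not the paper's exact formulation), the sketch below keeps one Gaussian component per pre-trained action, initializes each from a single sample, selects the action with the highest weighted responsibility for the current observation, and re-weights components after each execution according to success. The update rule and parameters are simplified assumptions.

        import numpy as np

        class RewardWeightedGMM:
            def __init__(self, init_samples, init_var=1.0):
                self.means = np.array(init_samples, dtype=float)   # one mean per action
                self.var = init_var                                # shared isotropic variance
                self.weights = np.ones(len(init_samples)) / len(init_samples)

            def _likelihoods(self, x):
                d2 = np.sum((self.means - x) ** 2, axis=1)
                return np.exp(-0.5 * d2 / self.var)

            def select_action(self, x):
                # Choose the action with the highest weighted responsibility for observation x.
                resp = self.weights * self._likelihoods(x)
                return int(np.argmax(resp))

            def update(self, action, x, success, lr=0.2):
                # Successful executions pull the component towards x and increase its weight;
                # failures decrease the weight (placeholder reward-weighted update).
                if success:
                    self.means[action] += lr * (x - self.means[action])
                self.weights[action] *= (1 + lr) if success else (1 - lr)
                self.weights /= self.weights.sum()

        # Toy usage: 3 pre-trained actions, 2-D sensory observation.
        gmm = RewardWeightedGMM(init_samples=[[0, 0], [1, 1], [2, 0]])
        x = np.array([0.9, 1.1])
        a = gmm.select_action(x)
        gmm.update(a, x, success=True)
        print("selected action:", a, "weights:", gmm.weights)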

    Indoor and outdoor depth imaging of leaves with time-of-flight and stereo vision sensors: analysis and comparison

    In this article we analyze the response of Time-of-Flight (ToF) cameras (active sensors) for close-range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. ToF cameras are sensitive to ambient light and have low resolution, but deliver accurate depth data at a high frame rate under suitable conditions. We introduce metrics for performance evaluation over a small region of interest. Based on these metrics, we analyze and compare depth imaging of leaves under indoor (room) and outdoor (shadow and sunlight) conditions by varying the exposure times of the sensors. The performance of three different ToF cameras (PMD CamBoard, PMD CamCube and SwissRanger SR4000) is compared against selected stereo-correspondence algorithms (local correlation and graph cuts). The PMD CamCube has the best cancellation of sunlight, followed by the CamBoard, while the SwissRanger SR4000 performs poorly under sunlight. Stereo vision is comparatively more robust to ambient illumination and provides high-resolution depth data, but is constrained by the texture of the object and by computational efficiency. The graph-cut-based stereo-correspondence algorithm better retrieves the shape of the leaves but is computationally much more expensive than local correlation. Finally, we propose a method to increase the dynamic range of ToF cameras for a scene involving both shadow and sunlight exposures at the same time by taking advantage of camera flags (PMD) or the confidence matrix (SwissRanger).
    © 2013 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.
    Peer Reviewed. Postprint (author's final draft).
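
    A minimal sketch of the exposure-fusion idea is given below: two depth maps taken with different exposure times are merged per pixel according to their confidence (or validity flags). The threshold and toy data are placeholders, not the article's calibration.

        import numpy as np

        def fuse_exposures(depth_short, conf_short, depth_long, conf_long, min_conf=0.3):
            # Prefer, per pixel, the exposure with the higher confidence.
            use_long = conf_long >= conf_short
            fused = np.where(use_long, depth_long, depth_short)
            best_conf = np.maximum(conf_short, conf_long)
            fused[best_conf < min_conf] = np.nan        # drop pixels no exposure measured well
            return fused

        # Toy 2x2 example: sunlit pixels saturate the long exposure, shaded ones the short one.
        d_short = np.array([[1.0, 1.1], [0.0, 0.0]]); c_short = np.array([[0.9, 0.8], [0.1, 0.2]])
        d_long  = np.array([[0.0, 0.0], [1.4, 1.5]]); c_long  = np.array([[0.1, 0.2], [0.9, 0.9]])
        print(fuse_exposures(d_short, c_short, d_long, c_long))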

    Robot explanatory narratives of collaborative and adaptive experiences

    © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    In the future, robots are expected to autonomously interact and/or collaborate with humans, who will increase the uncertainty during the execution of tasks, provoking online adaptations of the robots' plans. Hence, trustworthy robots must be able to store, retrieve and narrate important knowledge about their collaborations and adaptations. In this article, we propose a sound methodology that integrates three main elements: first, an ontology for collaborative robotics and adaptation to model the domain knowledge; second, an episodic memory for time-indexed knowledge storage and retrieval; and third, a novel algorithm to extract the relevant knowledge and generate textual explanatory narratives. The algorithm produces three different types of outputs, varying in specificity, for diverse uses and preferences. A pilot study was conducted to evaluate the usefulness of the narratives, obtaining promising results. Finally, we discuss how the methodology can be generalized to other ontologies and experiences. This work boosts robot explainability, especially in cases where robots need to narrate the details of their short- and long-term past experiences.
    Peer Reviewed. Postprint (author's final draft).
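
    As an illustration of the episodic-memory element (the event schema and wording are hypothetical, not the paper's format), the sketch below stores time-indexed events, retrieves them by interval, and turns them into a trivial narrative.

        from dataclasses import dataclass, field
        from bisect import insort

        @dataclass(order=True)
        class Episode:
            t: float                              # timestamp (seconds since task start)
            subject: str = field(compare=False)
            predicate: str = field(compare=False)
            obj: str = field(compare=False)

        class EpisodicMemory:
            def __init__(self):
                self.episodes = []

            def store(self, episode):
                insort(self.episodes, episode)    # keep episodes sorted by time

            def retrieve(self, t_start, t_end):
                return [e for e in self.episodes if t_start <= e.t <= t_end]

        def narrate(episodes):
            return " ".join(f"At t={e.t:.0f}s, {e.subject} {e.predicate} {e.obj}."
                            for e in episodes)

        mem = EpisodicMemory()
        mem.store(Episode(12, "the robot", "handed over", "the screwdriver"))
        mem.store(Episode(34, "the robot", "adapted its plan because of", "a human intervention"))
        print(narrate(mem.retrieve(0, 60)))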

    A virtual reality framework for fast dataset creation applied to cloth manipulation with automatic semantic labelling

    © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    Teaching complex manipulation skills, such as folding garments, to a bi-manual robot is a very challenging task, which is often tackled through learning from demonstration. The few datasets of garment-folding demonstrations available nowadays to the robotics research community have been either gathered from human demonstrations or generated through simulation. The former face the great difficulty of perceiving both cloth state and human action, as well as transferring them to the dynamic control of the robot, while the latter require coding human motion into the simulator in open loop, i.e., without incorporating the visual feedback naturally used by people, resulting in far-from-realistic movements. In this article, we present an accurate dataset of human cloth-folding demonstrations. The dataset is collected through our novel virtual reality (VR) framework, based on Unity's 3D platform and the use of an HTC Vive Pro system. The framework is capable of simulating realistic garments while allowing users to interact with them in real time through handheld controllers. By doing so, and thanks to the immersive experience, our framework permits exploiting human visual feedback in the demonstrations while at the same time removing the difficulty of capturing the state of the cloth, thus simplifying data acquisition and resulting in more realistic demonstrations. We create and make public a dataset of cloth manipulation sequences whose cloth states are semantically labeled in an automatic way by using a novel low-dimensional cloth representation that yields a very good separation between different cloth configurations.
    The research leading to these results received funding from the European Research Council (ERC) under the European Union Horizon 2020 Programme, grant agreement no. 741930 (CLOTHILDE: CLOTH manIpulation Learning from DEmonstrations), and from project SoftEnable (HORIZON-CL4-2021-DIGITAL-EMERGING-01-101070600). The authors also received funding from project CHLOE-GRAPH (PID2020-118649RB-I00), funded by MCIN/AEI/10.13039/501100011033, and COHERENT (PCI2020-120718-2), funded by MCIN/AEI/10.13039/501100011033 and co-funded by the European Union NextGenerationEU/PRTR.
    Peer Reviewed. Postprint (author's final draft).
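
    The abstract does not describe the low-dimensional representation itself; purely as an illustration of automatic semantic labelling from such a descriptor, the sketch below maps four tracked garment corners to their pairwise distances and assigns the nearest known configuration. Both the descriptor and the class centroids are assumptions, not the paper's representation.

        import numpy as np

        def descriptor(corners):
            # corners: (4, 3) array of corner positions -> 6 pairwise distances.
            idx = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
            return np.array([np.linalg.norm(corners[i] - corners[j]) for i, j in idx])

        CENTROIDS = {  # hypothetical descriptors of known cloth configurations
            "flat":        descriptor(np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])),
            "half_folded": descriptor(np.array([[0, 0, 0], [1, 0, 0], [0, 0, 0.02], [1, 0, 0.02]])),
        }

        def label(corners):
            # Assign the semantic label of the nearest known configuration.
            d = descriptor(corners)
            return min(CENTROIDS, key=lambda k: np.linalg.norm(CENTROIDS[k] - d))

        print(label(np.array([[0, 0, 0], [1, 0, 0], [0, 0.95, 0], [1, 1.05, 0]])))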